An Introduction to Causal Inference by Judea Pearl
Author: Judea Pearl
5.2. Problem formulation and the demystification of “ignorability”
The main drawback of this black-box approach surfaces in problem formulation, namely, the phase where a researcher begins to articulate the “science” or “causal assumptions” behind the problem of interest. Such knowledge, as we have seen in Section 1, must be articulated at the outset of every problem in causal analysis – causal conclusions are only as valid as the causal assumptions upon which they rest.
To communicate scientific knowledge, the potential-outcome analyst must express assumptions as constraints on P*, usually in the form of conditional independence assertions involving counterfactual variables. For instance, in our example of Fig. 5, to communicate the understanding that Z is randomized (hence independent of UX and UY), the potential-outcome analyst would use the independence constraint Z ⊥⊥ {Yz1, Yz2, . . . , Yzk}. To further formulate the understanding that Z does not affect Y directly, except through X, the analyst would write a so-called “exclusion restriction”: Yxz = Yx.
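To see what these two constraints say in operational terms, the following sketch simulates an assumed structural model in the spirit of Fig. 5 (Z randomized, Z affecting Y only through X, and an unobserved confounder U of X and Y); the equations and parameter values are invented here for illustration, not taken from the text. With the full model in hand, the counterfactuals Yz can be generated directly, and both constraints can be read off the simulation.

```python
import numpy as np

# Toy structural model assumed for illustration (in the spirit of Fig. 5):
# Z randomized, Z -> X -> Y, and an unobserved confounder U affecting both X and Y.
rng = np.random.default_rng(0)
n = 200_000

u = rng.normal(size=n)                       # unobserved confounder U of X and Y
z = rng.integers(0, 2, size=n)               # Z randomized, hence independent of U

def f_x(z_val, u, eps):                      # structural equation for X
    return (0.8 * z_val + u + eps > 0.5).astype(int)

def f_y(x_val, u, eps):                      # structural equation for Y: no Z argument
    return 1.5 * x_val + u + eps

eps_x = rng.normal(size=n)
eps_y = rng.normal(size=n)

# Counterfactuals Yz: the value Y would take had Z been forced to z0 (X responds to the forced Z).
y_z = {z0: f_y(f_x(z0, u, eps_x), u, eps_y) for z0 in (0, 1)}

# Constraint 1: Z independent of {Yz0, Yz1}.  With Z randomized, the distribution of each Yz
# should not depend on the observed Z (means agree up to sampling noise).
for z0, yz in y_z.items():
    print(f"E[Yz{z0} | Z=0] = {yz[z == 0].mean():.3f}   E[Yz{z0} | Z=1] = {yz[z == 1].mean():.3f}")

# Constraint 2 (exclusion restriction): Yxz = Yx.  Once X is held fixed, forcing Z changes
# nothing, because the structural equation f_y takes no Z argument.
y_x = f_y(1, u, eps_y)                       # Y with X forced to 1
y_xz = f_y(1, u, eps_y)                      # Y with X forced to 1 and Z forced (identical)
print("Exclusion restriction holds:", np.array_equal(y_x, y_xz))
```

The exclusion restriction holds here by construction, because the equation for Y takes no Z argument; that absence is exactly the content of Yxz = Yx.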
A collection of constraints of this type might sometimes be sufficient to permit a unique solution to the query of interest. For example, if one can plausibly assume that, in Fig. 4, a set Z of covariates satisfies the conditional independence
(35) Yx ⊥⊥ X | Z
(an assumption termed “conditional ignorability” by Rosenbaum and Rubin (1983)), then the causal effect P(y|do(x)) = P*(Yx = y) can readily be evaluated to yield
(36) P(y|do(x)) = P*(Yx = y)
= Σz P*(Yx = y | z) P(z)
= Σz P*(Yx = y | x, z) P(z)     (using (35))
= Σz P*(Y = y | x, z) P(z)      (using the consistency rule (34))
= Σz P(y | x, z) P(z)
The last expression contains no counterfactual quantities (thus permitting us to drop the asterisk from P*) and coincides precisely with the standard covariate-adjustment formula of Eq. (25).
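As a numerical check of Eq. (36), the sketch below assumes a small binary model in which Z is the only back-door variable (in the spirit of Fig. 4; the parameters are invented for illustration). The adjustment formula estimated from observational data alone is compared with P(y | do(x)) obtained by simulating the intervention directly, and with the naive conditional P(y | x), which remains confounded.

```python
import numpy as np

# Assumed binary model in which Z is the only back-door variable (Z -> X, Z -> Y, X -> Y);
# all parameters are invented for illustration.
rng = np.random.default_rng(1)
n = 500_000

z = rng.binomial(1, 0.4, size=n)                    # covariate Z
x = rng.binomial(1, np.where(z == 1, 0.7, 0.2))     # X depends on Z
y = rng.binomial(1, 0.1 + 0.5 * x + 0.3 * z)        # Y depends on X and Z

# (a) Adjustment formula (36), estimated from observational data alone:
#     sum over z of P(y=1 | x=1, z) * P(z)
adjusted = sum(y[(x == 1) & (z == z0)].mean() * (z == z0).mean() for z0 in (0, 1))

# (b) Ground truth P(y=1 | do(x=1)): rerun Y's structural equation with X forced to 1.
y_do = rng.binomial(1, 0.1 + 0.5 * 1 + 0.3 * z)
truth = y_do.mean()

# (c) Naive conditioning P(y=1 | x=1), which leaves the back-door path through Z open.
naive = y[x == 1].mean()

print(f"adjustment formula (36): {adjusted:.3f}")   # ~0.72
print(f"P(y=1 | do(x=1)):        {truth:.3f}")      # ~0.72, matches (a)
print(f"naive P(y=1 | x=1):      {naive:.3f}")      # ~0.81, biased by confounding
```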
We see that the assumption of conditional ignorability (35) qualifies Z as an admissible covariate for adjustment; it therefore mirrors the “back-door” criterion of Definition 3, which bases the admissibility of Z on an explicit causal structure encoded in the diagram.
The derivation above may explain why the potential-outcome approach appeals to mathematical statisticians; instead of constructing new vocabulary (e.g., arrows), new operators (do(x)), and new logic for causal analysis, almost all mathematical operations in this framework are conducted within the safe confines of probability calculus. Save for an occasional application of rule (34) or (32), the analyst may forget that Yx stands for a counterfactual quantity; it is treated as any other random variable, and the entire derivation follows the course of routine probability exercises.
This orthodoxy exacts a high cost: Instead of bringing the theory to the problem, the problem must be reformulated to fit the theory; all background knowledge pertaining to a given problem must first be translated into the language of counterfactuals (e.g., ignorability conditions) before analysis can commence. This translation may in fact be the hardest part of the problem. The reader may appreciate this aspect by attempting to judge whether the assumption of conditional ignorability (35), the key to the derivation of (36), holds in any familiar situation, say in the experimental setup of Fig. 2(a). This assumption reads: “the value that Y would obtain had X been x, is independent of X, given Z”. Even the most experienced potential-outcome expert would be unable to discern whether any subset Z of covariates in Fig. 2(a) would satisfy this condition.
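To unpack what this statement asks one to judge, the sketch below reuses the same invented toy model as before: because its structural equations are fully specified, the counterfactual Yx can be generated for every unit and the conditional independence (35) checked mechanically. In a real problem no such generative model is available, which is precisely what makes the mental judgment so difficult and what the explicit causal structure of the diagram is meant to supply.

```python
import numpy as np

# Same assumed structure as the earlier sketch (Z -> X, Z -> Y, X -> Y, no other confounding).
# With the structural equations known, the counterfactual Yx exists for every unit, and (35)
# can be checked by comparing E[Yx | X, Z] across the observed X within each stratum of Z.
rng = np.random.default_rng(2)
n = 500_000

z = rng.binomial(1, 0.4, size=n)
x = rng.binomial(1, np.where(z == 1, 0.7, 0.2))
u_y = rng.uniform(size=n)                            # Y's exogenous noise, reused below

def y_with_x_forced(x_val):
    # structural equation for Y, evaluated with X forced to x_val
    return (u_y < 0.1 + 0.5 * x_val + 0.3 * z).astype(int)

y_x1 = y_with_x_forced(1)                            # counterfactual Y_{x=1} for every unit

# Conditional ignorability: within each Z stratum, the distribution of Y_{x=1} does not
# depend on the unit's observed X (the four printed means agree pairwise within strata).
for z0 in (0, 1):
    for x0 in (0, 1):
        m = y_x1[(z == z0) & (x == x0)].mean()
        print(f"Z={z0}, X={x0}:  E[Y_x1 | X, Z] = {m:.3f}")
```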